Results 1 - 2 of 2
1.
IEEE/ACM Trans Comput Biol Bioinform ; 20(6): 3863-3875, 2023.
Article in English | MEDLINE | ID: mdl-37878431

ABSTRACT

Few-Shot Molecular Property Prediction (FSMPP) is an important task in drug discovery, which aims to learn transferable knowledge from base property prediction tasks with sufficient data in order to predict novel properties from only a few labeled molecules. Its key challenge is alleviating the data scarcity of novel properties. Pretrained Graph Neural Network (GNN) based FSMPP methods address this challenge by pre-training a GNN on large-scale self-supervised tasks and then fine-tuning it on base property prediction tasks before predicting novel properties. However, in this paper we find that the GNN fine-tuning step is not always effective and can even degrade the performance of the pretrained GNN on some novel properties. This is because molecule-property relationships change across different properties, which causes the fine-tuned GNN to overfit to base properties and harms the transferability of the pretrained GNN to novel properties. To address this issue, we propose a novel Adaptive Transfer framework of GNN for FSMPP, called ATGNN, which transfers the knowledge of the pretrained and fine-tuned GNNs in a task-adaptive manner to adapt to novel properties. Specifically, we first regard the pretrained and fine-tuned GNNs as model priors of the target-property GNN. Then, a task-adaptive weight prediction network is designed to leverage these priors to predict target GNN weights for novel properties. Finally, we combine our ATGNN framework with existing FSMPP methods. Extensive experiments on four real-world datasets, i.e., Tox21, SIDER, MUV, and ToxCast, show the effectiveness of our ATGNN framework.
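The abstract does not specify the architecture of the task-adaptive weight prediction network, but its core idea of combining the pretrained and fine-tuned GNNs as model priors for a target-property GNN can be sketched, in a deliberately simplified form, as a learned per-task interpolation of their weights. All names, shapes, and the gating function below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: one GNN weight matrix (d_in x d_out) and a
# task embedding summarizing the few labeled support molecules.
d_in, d_out, d_task = 8, 4, 16
w_pretrained = rng.normal(size=(d_in, d_out))  # prior 1: pretrained GNN
w_finetuned = rng.normal(size=(d_in, d_out))   # prior 2: fine-tuned GNN
task_embedding = rng.normal(size=d_task)       # derived from the novel task

# A toy "weight prediction network": a linear map into a sigmoid gate
# that decides, per task, how much to trust each model prior.
w_gate = rng.normal(size=d_task) * 0.1

def predict_target_weights(task_emb):
    alpha = 1.0 / (1.0 + np.exp(-(w_gate @ task_emb)))  # gate in (0, 1)
    # Convex combination of the two priors yields the target GNN weights.
    return alpha * w_pretrained + (1.0 - alpha) * w_finetuned

w_target = predict_target_weights(task_embedding)
assert w_target.shape == (d_in, d_out)
```

Because the gate depends on the task embedding, a novel property whose molecule-property relationships resemble the base tasks can lean toward the fine-tuned prior, while a dissimilar one can fall back on the pretrained prior.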


Subjects
Drug Discovery; Neural Networks, Computer
2.
Neural Netw ; 168: 256-271, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37774512

ABSTRACT

As a pixel-wise dense forecasting task, video prediction is challenging due to its high computational complexity, large future uncertainty, and extremely complicated spatial-temporal patterns. Many deep learning methods have been proposed for the task and bring significant improvements. However, they focus on modeling short-term spatial-temporal dynamics and fail to sufficiently exploit long-term ones. As a result, these methods tend to deliver unsatisfactory performance when long-term forecasts are required. In this article, we propose a novel unified memory network (UNIMEMnet) for long-term video prediction, which can effectively exploit long-term motion-appearance dynamics and unify short-term and long-term spatial-temporal dynamics in a single architecture. In the UNIMEMnet, a dual-branch multi-scale memory module is carefully designed to extract and preserve long-term spatial-temporal patterns. In addition, a short-term spatial-temporal dynamics module and an alignment and fusion module are devised to capture short-term motion-appearance dynamics and coordinate them with the long-term ones from our designed memory module. Extensive experiments on five video prediction datasets from both synthetic and real-world scenarios are conducted, which validate the effectiveness and superiority of our proposed UNIMEMnet over state-of-the-art methods.
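The abstract does not detail how the dual-branch multi-scale memory module stores and retrieves long-term patterns. One common mechanism for this kind of external memory, sketched below purely as an illustrative assumption, is soft attention: the current short-term feature acts as a query, and the recalled long-term pattern is a softmax-weighted sum over stored memory slots:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: n_slots stored long-term patterns of dimension d.
n_slots, d = 32, 64
memory = rng.normal(size=(n_slots, d))   # preserved long-term patterns
query = rng.normal(size=d)               # current short-term feature

def memory_read(query, memory):
    # Scaled dot-product similarity between the query and each slot.
    scores = memory @ query / np.sqrt(memory.shape[1])
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over slots
    return weights @ memory                          # weighted recall

recalled = memory_read(query, memory)
assert recalled.shape == (d,)
```

A recalled vector like this could then be fused with the short-term feature (e.g., by the alignment and fusion module the abstract mentions) so that predictions far into the future are conditioned on patterns learned over much longer horizons than the input clip.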


Subjects
Motion (Physics); Uncertainty